We propose a novel task, G4C (Goal-driven Guidance Generation in Grounded Communication), for studying goal-driven and grounded natural language interactions. Specifically, we choose Dungeons and Dragons (D&D) -- a role-playing game consisting of multiple player characters and a Dungeon Master (DM) who collaborate to achieve a set of goals that are beneficial to the players -- as a testbed for this task. Here, each of the player characters is a student with their own persona and abilities, and the DM is the teacher: an arbitrator of the rules of the world who is responsible for assisting and guiding the students towards a global goal. We propose a theory-of-mind-inspired methodology for training such a DM with reinforcement learning (RL), where the DM: (1) learns to predict how the players will react to its utterances using a dataset of D&D dialogue transcripts; and (2) uses this prediction as a reward function providing feedback on how effective these utterances are at guiding the players towards a goal. Human and automated evaluations show that a DM trained with RL to generate guidance by incorporating a theory-of-mind of the players significantly improves the players' ability to achieve goals grounded in their shared world.
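To make the reward design concrete, here is a minimal sketch of how a learned player-reaction predictor could act as an RL reward signal for guidance generation; the class and function names are illustrative stand-ins, not the paper's implementation.

```python
# Hypothetical sketch: a learned "player reaction" predictor used as an RL reward
# for guidance generation. Names are illustrative, not taken from the paper.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class DialogueState:
    history: List[str]   # prior D&D dialogue turns
    goal: str            # the intended player action, e.g. "investigate the altar"

def predict_player_action(state: DialogueState, dm_utterance: str) -> str:
    """Stand-in for a theory-of-mind model trained on D&D transcripts that
    anticipates what the players will do after hearing the DM's guidance."""
    # A real system would run a fine-tuned language model here.
    return random.choice(["investigate the altar", "attack the goblin", "flee"])

def reward(state: DialogueState, dm_utterance: str) -> float:
    """Reward is 1.0 when the predicted player reaction matches the DM's goal."""
    return 1.0 if predict_player_action(state, dm_utterance) == state.goal else 0.0

# Outer RL loop (policy-gradient style, heavily simplified): sample candidate
# guidance from the DM policy, score it with `reward`, and reinforce utterances
# that are predicted to steer the players toward the goal.
```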
The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of "in the wild" objects and scenes. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.
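As a rough illustration of the first task, self-supervised visuo-tactile feature learning is often instantiated as a contrastive objective over time-aligned image/tactile pairs; the sketch below shows a generic InfoNCE-style loss under that assumption, not the paper's exact objective.

```python
# Generic sketch of contrastive visuo-tactile pretraining (InfoNCE-style).
# Illustrates the task category only; this is not the paper's exact method.
import torch
import torch.nn.functional as F

def info_nce(visual_emb: torch.Tensor, tactile_emb: torch.Tensor, tau: float = 0.07):
    """visual_emb, tactile_emb: (N, D) embeddings of time-aligned frame pairs."""
    v = F.normalize(visual_emb, dim=1)
    t = F.normalize(tactile_emb, dim=1)
    logits = v @ t.T / tau                                   # (N, N) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)       # matching pairs on the diagonal
    # Symmetric loss: match each image to its tactile signal and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

# Two small encoders (e.g. ResNets) would map RGB frames and tactile-sensor images
# to these embeddings, which the loss pulls together for co-occurring pairs.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```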
Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have tight computational and memory constraints. In this Mobile AI challenge, we address this problem and ask the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
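For context, a typical pipeline for such a challenge entry is a small convolutional network exported with full-integer (INT8) quantization; the sketch below uses TensorFlow/TFLite's standard post-training quantization workflow with a toy 3X pixel-shuffle model, and is an assumed illustration of the setup rather than any participant's solution.

```python
# Assumed illustration: a tiny 3X super-resolution model exported as a fully
# integer-quantized TFLite model, the kind of artifact an edge NPU can accelerate.
import numpy as np
import tensorflow as tf

def tiny_sr_model(scale: int = 3) -> tf.keras.Model:
    """Deliberately small SR network: two convolutions plus a pixel shuffle."""
    inp = tf.keras.Input(shape=(360, 640, 3))       # e.g. 360p input -> Full HD output
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(3 * scale * scale, 3, padding="same")(x)
    out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return tf.keras.Model(inp, out)

def representative_data():
    # In the challenge this would iterate over DIV2K low-resolution crops.
    for _ in range(8):
        yield [np.random.rand(1, 360, 640, 3).astype(np.float32)]

model = tiny_sr_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8            # fully integer I/O for NPU deployment
converter.inference_output_type = tf.uint8
open("sr_int8.tflite", "wb").write(converter.convert())
```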
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible, and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of the medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations, and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical, and industrial teams around the world, who are pursuing applications spanning nearly every aspect of healthcare.
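A brief example of the kind of workflow MONAI streamlines, built from standard MONAI components (dictionary-based transforms, a 3D UNet, and a Dice loss); the specific parameter values are placeholders.

```python
# Small example of MONAI's medical-imaging-aware building blocks
# (MONAI installed via `pip install monai`). Parameter values are placeholders.
from monai.transforms import Compose, LoadImaged, EnsureChannelFirstd, Spacingd, ScaleIntensityd
from monai.networks.nets import UNet
from monai.losses import DiceLoss

# Dictionary-based transforms understand medical image formats and geometry:
# Spacingd, for example, resamples volumes to a common voxel spacing.
preprocess = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    Spacingd(keys=["image", "label"], pixdim=(1.0, 1.0, 1.0)),
    ScaleIntensityd(keys=["image"]),
])

# A purpose-built 3D segmentation network and loss from MONAI.
model = UNet(spatial_dims=3, in_channels=1, out_channels=2,
             channels=(16, 32, 64, 128), strides=(2, 2, 2), num_res_units=2)
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)
```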
Time series data arise in a wide range of applications, such as intelligent transportation and environmental monitoring. One of the fundamental problems in time series analysis is time series forecasting. Despite the recent success of deep time series forecasting methods, they still require a sufficient number of historical observations to make accurate predictions. In other words, the ratio of the output length (or forecasting horizon) to the sum of the input and output lengths must be sufficiently low (e.g., 0.3). As this ratio increases (e.g., to 0.8), the uncertainty of the predictions grows significantly. In this paper, we show both theoretically and empirically that this uncertainty can be effectively reduced by retrieving relevant time series as references. In the theoretical analysis, we first quantify the uncertainty and show its connection to the mean squared error (MSE). We then prove that a model with references is easier to learn than one without, since the retrieved references can lower the uncertainty. To empirically demonstrate the effectiveness of retrieval-based time series forecasting, we introduce a simple yet effective two-stage method consisting of relational retrieval and content synthesis. We also show that the method can be easily adapted to spatio-temporal time series and time series imputation settings. Finally, we evaluate it on real-world datasets to demonstrate its effectiveness.
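A toy illustration of the retrieve-then-synthesize idea: find reference series whose observed prefix resembles the query, then combine their continuations. The paper learns both stages; this nearest-neighbor sketch only conveys the general mechanism.

```python
# Toy retrieval-augmented forecasting: retrieve reference series with similar
# prefixes and synthesize a forecast from their futures. A simplification, not
# the paper's actual two-stage model.
import numpy as np

def retrieve_and_forecast(query_prefix, pool, horizon, k=5):
    """query_prefix: (L,) observed values; pool: (N, L + horizon) reference series."""
    prefixes = pool[:, : len(query_prefix)]
    # Relational retrieval: rank references by Euclidean distance to the prefix.
    dists = np.linalg.norm(prefixes - query_prefix, axis=1)
    nearest = np.argsort(dists)[:k]
    # Content synthesis (here a distance-weighted average of the references'
    # future segments; the paper learns this step instead).
    weights = 1.0 / (dists[nearest] + 1e-8)
    futures = pool[nearest, len(query_prefix): len(query_prefix) + horizon]
    return (weights[:, None] * futures).sum(axis=0) / weights.sum()

rng = np.random.default_rng(0)
pool = np.cumsum(rng.normal(size=(100, 48)), axis=1)     # 100 reference series
query = pool[0, :8] + rng.normal(scale=0.1, size=8)      # short observed prefix
print(retrieve_and_forecast(query, pool, horizon=40))
```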
Food image classification, which predicts food categories from images, is fundamental to image-based dietary assessment. Since there are many different food classes in real life, conventional models cannot reach sufficiently high accuracy. Personalized classifiers aim to largely improve the accuracy of food image classification for each individual. However, the lack of public personal food consumption data makes training such models challenging. To address this issue, we propose a novel framework for simulating personal food consumption data patterns, leveraging a modified Markov chain model and self-supervised learning. Our method can create plausible future data patterns from limited initial data, and the simulated patterns remain closely correlated with the initial ones. Furthermore, we use dynamic time warping distance and Kullback-Leibler divergence as metrics to evaluate the effectiveness of our method on the public Food-101 dataset. Our experimental results demonstrate promising performance compared with random simulation and the original Markov chain method.
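A minimal sketch of the Markov chain portion of such a simulator: estimate a first-order transition matrix over food categories from a short personal history, then sample future consumption patterns. The paper's modifications to the chain and its self-supervised component are not reproduced here.

```python
# Minimal sketch: fit a first-order Markov chain over food categories from limited
# personal data, then simulate future consumption. Not the paper's modified chain.
import numpy as np

def fit_transition_matrix(history, n_classes, smoothing=1.0):
    """history: sequence of food-class indices consumed by one person."""
    counts = np.full((n_classes, n_classes), smoothing)   # Laplace smoothing
    for prev, nxt in zip(history[:-1], history[1:]):
        counts[prev, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def simulate(transitions, start, length, rng):
    states = [start]
    for _ in range(length - 1):
        states.append(rng.choice(len(transitions), p=transitions[states[-1]]))
    return states

rng = np.random.default_rng(0)
history = [2, 0, 1, 2, 2, 0, 1, 0, 2, 1]        # toy personal history over 3 classes
T = fit_transition_matrix(history, n_classes=3)
print(simulate(T, start=history[-1], length=20, rng=rng))
```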
Reducing methane emissions is essential for mitigating global warming. Attributing methane emissions to their sources requires a comprehensive dataset of methane source infrastructure. Recent advances in deep learning on remotely sensed imagery have the potential to identify the locations and characteristics of methane sources, but the lack of publicly available data prevents machine learning researchers and practitioners from building automated mapping approaches. To help fill this gap, we construct a multi-sensor dataset called METER-ML containing 86,625 georeferenced NAIP, Sentinel-1, and Sentinel-2 images in the U.S. labeled for the presence or absence of methane source facilities, including concentrated animal feeding operations, coal mines, landfills, natural gas processing plants, oil refineries and petroleum terminals, and wastewater treatment plants. We experiment with a variety of models that leverage different spatial resolutions, spatial footprints, image products, and spectral bands. We find that our best model achieves a high area under the precision-recall curve for identifying concentrated animal feeding operations and 0.821 for oil refineries and petroleum terminals on an expert-labeled test set, suggesting the potential for large-scale mapping. We make METER-ML freely available at https://stanfordmlgroup.github.io/projects/meter-ml/ to support future work on automated methane source mapping.
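For reference, the reported metric can be computed per facility type with scikit-learn's average precision, a standard estimator of the area under the precision-recall curve; the labels and scores below are dummy values.

```python
# Illustrative only: per-facility-type AUPRC via average precision in scikit-learn.
import numpy as np
from sklearn.metrics import average_precision_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                      # expert labels for one facility type
y_score = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])     # model probabilities
print(f"AUPRC (average precision): {average_precision_score(y_true, y_score):.3f}")
```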
Autonomous agents have made great strides in specialist domains such as Atari games. However, they typically learn tabula rasa in isolated environments with limited, manually conceived objectives, and thus fail to generalize across a wide spectrum of tasks and capabilities. Inspired by how humans continually learn and adapt in the open world, we advocate a trinity of ingredients for building generalist agents: 1) an environment that supports a multitude of tasks and goals, 2) a large-scale database of multimodal knowledge, and 3) a flexible and scalable agent architecture. We introduce MineDojo, a new framework built on the popular game Minecraft that features a simulation suite with thousands of diverse open-ended tasks and an internet-scale knowledge base of Minecraft videos, tutorials, wiki pages, and forum discussions. Using MineDojo's data, we propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function. Our agent is able to solve a variety of open-ended tasks specified in free-form language, without any manually designed dense shaping rewards. We open-source the simulation suite and knowledge bases (https://minedojo.org) to promote research towards the goal of generally capable embodied agents.
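A schematic of using a frozen video-language model as a learned reward: score how well the agent's recent frames match a free-form task prompt. The similarity function below is a placeholder stub, not MineDojo's actual reward-model API.

```python
# Schematic of a video-language-model reward: reward the agent when its recent
# frames look like the text description of the task. `video_text_similarity`
# is a placeholder stub, not MineDojo's actual API.
from collections import deque
import numpy as np

def video_text_similarity(frames, prompt: str) -> float:
    """Stand-in for a frozen video-language model that scores how well a short
    clip matches a text description."""
    return float(np.random.rand())   # a real model would embed both and compare them

def shaped_reward(frame_buffer: deque, prompt: str, baseline: float = 0.5) -> float:
    """Dense reward: similarity of the last few frames to the task prompt,
    offset by a baseline so uninformative clips contribute roughly zero."""
    return video_text_similarity(list(frame_buffer), prompt) - baseline

frames = deque(maxlen=16)
for _ in range(16):
    frames.append(np.zeros((160, 256, 3), dtype=np.uint8))   # dummy RGB observations
print(shaped_reward(frames, "shear a sheep with shears"))
```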
Large language models have achieved high performance on various question answering (QA) benchmarks, but the explainability of their outputs remains elusive. Structured explanations, called entailment trees, have recently been proposed as a way to explain and inspect a QA system's answer. To better generate such trees, we propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR). Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises. The IRGR model iteratively searches for suitable premises, constructing a single entailment step at a time. In contrast to previous approaches, our method interleaves generation steps with retrieval of premises, allowing the model to leverage intermediate conclusions and mitigating the input size limits of baseline encoder models. We conduct experiments on the EntailmentBank dataset, where we outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
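A high-level sketch of the iterative retrieve-then-generate loop such an architecture implies: retrieve a few premises, generate one entailment step, return its conclusion to the premise pool, and repeat. The retrieval and generation functions are stubs, not IRGR's trained models.

```python
# High-level sketch of iterative retrieval-generation for entailment trees.
# Retrieval and generation are stubs, not IRGR's actual models.
from typing import List, Tuple

def retrieve(hypothesis: str, pool: List[str], partial_proof: List[str], k: int = 2) -> List[str]:
    # A real system would use a retriever conditioned on the hypothesis and the
    # proof so far; here we simply take the first k available premises.
    return pool[:k]

def generate_step(premises: List[str], hypothesis: str) -> Tuple[str, str]:
    # A real system would run a seq2seq model; here the conclusion is a placeholder.
    conclusion = f"conclusion({'; '.join(premises)})"
    return f"{' & '.join(premises)} -> {conclusion}", conclusion

def build_tree(hypothesis: str, corpus: List[str], max_steps: int = 3) -> List[str]:
    pool, proof = list(corpus), []
    for _ in range(max_steps):
        premises = retrieve(hypothesis, pool, proof)
        step, conclusion = generate_step(premises, hypothesis)
        proof.append(step)
        # Intermediate conclusions rejoin the pool so later steps can reuse them.
        pool = [p for p in pool if p not in premises] + [conclusion]
        if conclusion == hypothesis:      # stop once the hypothesis is derived
            break
    return proof

print(build_tree("plants need sunlight to grow",
                 ["plants perform photosynthesis", "photosynthesis requires sunlight"]))
```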
A well-studied problem is that of calibrating the confidence scores produced by machine learning algorithms into ground-truth probabilities. Our starting point is that calibration appears to be incompatible with class weighting, a technique often used when one class is less common (class imbalance) or when some external objective is to be achieved (cost-sensitive learning). We provide a model-based explanation for this incompatibility and use this model to derive a simple method for recovering likelihoods from an algorithm that has been miscalibrated by class weighting. We validate this approach on the binary pneumonia detection task of Rajpurkar, Irvin, Zhu, et al. (2017).
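One standard correction with this flavor (not necessarily the paper's exact derivation) is a prior-shift adjustment: if the positive class was up-weighted by a factor w during training, dividing the model's posterior odds by w recovers an unweighted probability estimate.

```python
# Standard prior-shift correction, shown for illustration (the paper derives its
# own recovery method): undo the effective prior change induced by class weights
# by dividing the posterior odds by the positive-class weight.
import numpy as np

def unweight_scores(scores: np.ndarray, pos_weight: float) -> np.ndarray:
    """Map sigmoid outputs of a model trained with the positive class up-weighted
    by `pos_weight` back toward unweighted probabilities."""
    odds = scores / (1.0 - scores)
    corrected_odds = odds / pos_weight          # undo the effective prior shift
    return corrected_odds / (1.0 + corrected_odds)

# Example: a model trained with pos_weight=10 outputs 0.8; the corrected
# probability is roughly 0.29, reflecting the rarer true positive prevalence.
print(unweight_scores(np.array([0.8]), pos_weight=10.0))
```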